
    Bidimensionality of Geometric Intersection Graphs

    Let B be a finite collection of geometric (not necessarily convex) bodies in the plane. Clearly, this class of geometric objects naturally generalizes the class of disks, lines, ellipsoids, and even convex polygons. We consider geometric intersection graphs GB, where each body of the collection B is represented by a vertex, and two vertices of GB are adjacent if the intersection of the corresponding bodies is non-empty. For such graph classes, and under natural restrictions on their maximum degree or subgraph exclusion, we prove that the relation between their treewidth and the maximum size of a grid minor is linear. These combinatorial results vastly extend the applicability of all the meta-algorithmic results of bidimensionality theory to geometrically defined graph classes.
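To make the abstract's central definition concrete, here is a minimal sketch (not from the paper) that builds the intersection graph of a collection of disks, one of the simplest kinds of bodies mentioned above: each disk becomes a vertex, and two vertices are adjacent exactly when the disks overlap.

```python
from itertools import combinations

def disk_intersection_graph(disks):
    """Intersection graph of disks given as (x, y, r) triples: vertex i is
    adjacent to vertex j iff disks i and j have non-empty intersection."""
    edges = set()
    for i, j in combinations(range(len(disks)), 2):
        (x1, y1, r1), (x2, y2, r2) = disks[i], disks[j]
        # disks intersect iff the distance between centers is at most r1 + r2
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2:
            edges.add((i, j))
    return edges

# three disks in a row: 0 meets 1 and 1 meets 2, but 0 and 2 are disjoint
print(disk_intersection_graph([(0, 0, 1), (1.5, 0, 1), (3.5, 0, 1)]))
# -> {(0, 1), (1, 2)}
```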

    Data Sketches for Disaggregated Subset Sum and Frequent Item Estimation

    We introduce and study a new data sketch for processing massive datasets. It addresses two common problems: 1) computing a sum given arbitrary filter conditions and 2) identifying the frequent items or heavy hitters in a dataset. For the former, the sketch provides unbiased estimates with state-of-the-art accuracy. It handles the challenging scenario when the data is disaggregated, so that computing the per-unit metric of interest requires an expensive aggregation. For example, the metric of interest may be total clicks per user while the raw data is a click stream with multiple rows per user. Thus the sketch is suitable for use in a wide range of applications, including computing historical click-through rates for ad prediction, reporting user metrics from event streams, and measuring network traffic for IP flows. We prove and empirically show that the sketch has good properties for both the disaggregated subset sum estimation and frequent item problems. On i.i.d. data, it not only picks out the frequent items but gives strongly consistent estimates for the proportion of each frequent item. The resulting sketch asymptotically draws a probability-proportional-to-size sample that is optimal for estimating sums over the data. For non-i.i.d. data, we show that it typically does much better than random sampling for the frequent item problem and never does worse. For subset sum estimation, we show that even for pathological sequences, the variance is close to that of an optimal sampling design. Empirically, despite the disadvantage of operating on disaggregated data, our method matches or bests priority sampling, a state-of-the-art method for pre-aggregated data, and performs orders of magnitude better on skewed data compared to uniform sampling. We propose extensions to the sketch that allow it to be used for combining multiple datasets, in distributed systems, and for time-decayed aggregation.
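As a rough illustration of the flavor of such a sketch, here is a toy priority-sampling-style adaptation for disaggregated streams. This is emphatically not the paper's algorithm (whose analysis handles the statistical subtleties this toy version glosses over): it aggregates weights per key under a fixed capacity, evicts the key with the smallest priority weight/h(key), and answers subset-sum queries from the survivors.

```python
import hashlib

class ToySubsetSumSketch:
    """Toy capacity-bounded sketch for disaggregated subset sum estimation
    (illustrative adaptation of priority sampling, not the paper's sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.weights = {}   # key -> aggregated weight of the kept keys
        self.tau = 0.0      # largest priority evicted so far (threshold)

    def _h(self, key):
        # fixed uniform hash of the key into (0, 1]
        d = hashlib.blake2b(str(key).encode(), digest_size=8).digest()
        return (int.from_bytes(d, "big") + 1) / 2.0 ** 64

    def update(self, key, weight=1.0):
        # aggregate the disaggregated rows per key, then evict if over budget
        self.weights[key] = self.weights.get(key, 0.0) + weight
        if len(self.weights) > self.capacity:
            victim = min(self.weights, key=lambda k: self.weights[k] / self._h(k))
            self.tau = max(self.tau, self.weights[victim] / self._h(victim))
            del self.weights[victim]

    def subset_sum(self, predicate):
        # priority-sampling-style estimate of the total weight of the keys
        # satisfying an arbitrary filter condition
        return sum(max(w, self.tau)
                   for k, w in self.weights.items() if predicate(k))

# toy usage: total clicks of selected users from a multi-row click stream
sketch = ToySubsetSumSketch(capacity=100)
for user, clicks in [("u1", 3), ("u2", 1), ("u1", 2), ("u3", 5)]:
    sketch.update(user, clicks)
print(sketch.subset_sum(lambda user: user != "u2"))  # estimates 3 + 2 + 5 = 10
```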

    An O(n^3)-Time Algorithm for Tree Edit Distance

    The edit distance between two ordered trees with vertex labels is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. In this paper, we present a worst-case $O(n^3)$-time algorithm for this problem, improving the previous best $O(n^3 \log n)$-time algorithm [Klein]. Our result requires a novel adaptive strategy for deciding how a dynamic program divides into subproblems (which is interesting in its own right), together with a deeper understanding of the previous algorithms for the problem. We also prove the optimality of our algorithm among the family of decomposition strategy algorithms (which also includes the previous fastest algorithms) by tightening the known lower bound of $\Omega(n^2 \log^2 n)$ [Touzet] to $\Omega(n^3)$, matching our algorithm's running time. Furthermore, we obtain matching upper and lower bounds of $\Theta(n m^2 (1 + \log \frac{n}{m}))$ when the two trees have different sizes $m$ and $n$, where $m < n$. Comment: 10 pages, 5 figures, 5 .tex files where TED.tex is the main one.
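The recurrence that all decomposition strategy algorithms refine is easy to state. Below is a minimal memoized version of it (unit costs, always decomposing at the rightmost roots), a sketch that computes the same distance but makes no attempt at the paper's $O(n^3)$ bound.

```python
from functools import lru_cache

# A tree is (label, children) with children a tuple of trees;
# a forest is a tuple of trees.

def size(forest):
    return sum(1 + size(t[1]) for t in forest)

@lru_cache(maxsize=None)
def fed(F, G):
    """Edit distance between ordered forests F and G (unit costs)."""
    if not F:
        return size(G)                  # insert everything left in G
    if not G:
        return size(F)                  # delete everything left in F
    (l, C), Frest = F[-1], F[:-1]       # rightmost tree of F and the rest
    (m, D), Grest = G[-1], G[:-1]
    return min(
        fed(Frest + C, G) + 1,                     # delete the root labeled l
        fed(F, Grest + D) + 1,                     # insert the root labeled m
        fed(Frest, Grest) + fed(C, D) + (l != m),  # match l with m (relabel)
    )

def tree_edit_distance(t1, t2):
    return fed((t1,), (t2,))

t1 = ("f", (("a", ()), ("b", ())))
t2 = ("f", (("a", ()), ("c", ())))
print(tree_edit_distance(t1, t2))  # 1: relabel b -> c
```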

    Automatically Generating and Solving Eternity II Style Puzzles


    Locked and Unlocked Polygonal Chains in 3D

    In this paper, we study movements of simple polygonal chains in 3D. We say that an open, simple polygonal chain can be straightened if it can be continuously reconfigured to a straight sequence of segments in such a manner that both the length of each link and the simplicity of the chain are maintained throughout the movement. The analogous concept for closed chains is convexification: reconfiguration to a planar convex polygon. Chains that cannot be straightened or convexified are called locked. While there are open chains in 3D that are locked, we show that if an open chain has a simple orthogonal projection onto some plane, it can be straightened. For closed chains, we show that there are unknotted but locked closed chains, and we provide an algorithm for convexifying a planar simple polygon in 3D with a polynomial number of moves. Comment: To appear in Proc. 10th ACM-SIAM Sympos. Discrete Algorithms, Jan. 1999.
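The hypothesis of the straightening result, a simple orthogonal projection, is easy to test. Here is a small sketch, assuming the projected chain is in general position (no collinear overlaps), that checks whether the projection of an open 3D chain onto the xy-plane is simple.

```python
def ccw(a, b, c):
    # twice the signed area of triangle abc
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p, q, r, s):
    # proper crossing test for segments pq and rs (general position)
    return (ccw(r, s, p) > 0) != (ccw(r, s, q) > 0) and \
           (ccw(p, q, r) > 0) != (ccw(p, q, s) > 0)

def has_simple_xy_projection(chain3d):
    """True iff the xy-projection of the open chain is simple, i.e. no two
    non-adjacent links cross (adjacent links share an endpoint, so skip them)."""
    pts = [(x, y) for (x, y, _) in chain3d]
    segs = list(zip(pts, pts[1:]))
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):
            if segments_cross(*segs[i], *segs[j]):
                return False
    return True

print(has_simple_xy_projection([(0, 0, 0), (1, 0, 2), (1, 1, 1), (0, 1, 3)]))  # True
print(has_simple_xy_projection([(0, 0, 0), (2, 2, 1), (2, 0, 2), (0, 2, 3)]))  # False
```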

    Reconstructing David Huffman's Origami Tessellations

    David A. Huffman (1925–1999) is best known in computer science for his work in information theory, particularly Huffman codes, and best known in origami as a pioneer of curved-crease folding. But during his early paper folding in the 1970s, he also designed and folded over 100 different straight-crease origami tessellations. Unlike most origami tessellations designed in the past 20 years, Huffman's straight-crease tessellations are mostly three-dimensional, rigidly foldable, and have no locking mechanism. In collaboration with Huffman's family, our goal is to document all of his designs by reverse-engineering his models into the corresponding crease patterns, or in some cases, matching his models with his sketches of crease patterns. Here, we describe several of Huffman's origami tessellations that are most interesting historically, mathematically, and artistically. National Science Foundation (U.S.) (Origami Design for Integration of Self-assembling Systems for Engineering Innovation Grant EFRI-1240383); National Science Foundation (U.S.) (Expedition Grant CCF-1138967).

    Contraction Bidimensionality: the Accurate Picture

    We provide new combinatorial theorems on the structure of graphs that are contained as contractions in graphs of large treewidth. As a consequence of our combinatorial results, we unify and significantly simplify contraction bidimensionality theory, the meta-algorithmic framework for designing efficient parameterized and approximation algorithms for contraction-closed parameters.

    On the Structure of Equilibria in Basic Network Formation

    We study network connection games where the nodes of a network perform edge swaps in order to improve their communication costs. For the model proposed by Alon et al. (2010), in which the selfish cost of a node is the sum of all shortest path distances to the other nodes, we use the probabilistic method to provide a new, structural characterization of equilibrium graphs. We show how to use this characterization in order to prove upper bounds on the diameter of equilibrium graphs in terms of the size of the largest $k$-vicinity (defined as the set of vertices within distance $k$ from a vertex), for any $k \geq 1$, and in terms of the number of edges, thus settling positively a conjecture of Alon et al. in the cases of graphs of large $k$-vicinity size (including graphs of large maximum degree) and of graphs which are dense enough. Next, we present a new swap-based network creation game, in which selfish costs depend on the immediate neighborhood of each node; in particular, the profit of a node is defined as the sum of the degrees of its neighbors. We prove that, in contrast to the previous model, this network creation game admits an exact potential, and also that any equilibrium graph contains an induced star. The existence of the potential function is exploited in order to show that an equilibrium can be reached in expected polynomial time even in the case where nodes can only acquire limited knowledge concerning non-neighboring nodes. Comment: 11 pages, 4 figures.
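One natural candidate for an exact potential in the degree-sum game (our illustration, not necessarily the function used in the paper) is $\Phi(G) = \sum_v \deg(v)^2$: if node $v$ swaps an edge from neighbor $u$ to non-neighbor $w$, its profit changes by $(\deg(w) + 1) - \deg(u)$, while $\Phi$ changes by exactly twice that amount. The sketch below verifies this identity on a random swap.

```python
import random

def profit(adj, v):
    # selfish profit in the swap game: sum of the degrees of v's neighbors
    return sum(len(adj[u]) for u in adj[v])

def potential(adj):
    # candidate exact potential: sum of squared degrees
    return sum(len(nbrs) ** 2 for nbrs in adj.values())

random.seed(0)
n = 12
adj = {v: set() for v in range(n)}
for _ in range(20):                           # sprinkle random edges
    a, b = random.sample(range(n), 2)
    adj[a].add(b); adj[b].add(a)

# pick a node with both a neighbor to drop and a non-neighbor to connect to
v = next(u for u in range(n) if adj[u] and len(adj[u]) < n - 1)
old = next(iter(adj[v]))
new = next(w for w in range(n) if w != v and w not in adj[v])

p0, phi0 = profit(adj, v), potential(adj)
adj[v].remove(old); adj[old].remove(v)        # v swaps the edge {v, old} ...
adj[v].add(new); adj[new].add(v)              # ... for the edge {v, new}
p1, phi1 = profit(adj, v), potential(adj)

assert phi1 - phi0 == 2 * (p1 - p0)           # potential tracks v's profit change
print("profit change:", p1 - p0, "potential change:", phi1 - phi0)
```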

    Integrated music and math projects in secondary education

    The introduction of projects involving music and mathematics in Secondary Education should allow the integration of these disciplines by non-specialists. In the present work, we describe an experience carried out with future mathematics teachers who have solid scientific-technical training but little musical training. This pilot study provides concrete guidelines and results for the creation and development of STEAM activities, which can be found in [5]. This research work has been carried out within the framework of project EDU2017-84979-R of the Spanish State Program of R&D and Innovation Oriented to the Challenges of Society.

    Beyond Worst-Case Analysis for Joins with Minesweeper

    We describe a new algorithm, Minesweeper, that is able to satisfy stronger runtime guarantees than previous join algorithms (colloquially, "beyond worst-case guarantees") for data in indexed search trees. Our first contribution is developing a framework to measure this stronger notion of complexity, which we call certificate complexity and which extends notions of Barbay et al. and Demaine et al.; a certificate is a set of propositional formulae that certifies that the output is correct. This notion captures a natural class of join algorithms. In addition, the certificate allows us to define a strictly stronger notion of runtime complexity than traditional worst-case guarantees. Our second contribution is to develop a dichotomy theorem for the certificate-based notion of complexity. Roughly, we show that Minesweeper evaluates $\beta$-acyclic queries in time linear in the certificate plus the output size, while for any $\beta$-cyclic query there is some instance that takes superlinear time in the certificate (and for which the output is no larger than the certificate size). We also extend our certificate-complexity analysis to queries with bounded treewidth and the triangle query. Comment: [This is the full version of our PODS'2014 paper.]
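Certificate complexity extends the adaptive set-intersection bounds of Demaine et al., which the abstract names as a precursor. As a self-contained illustration of that precursor idea (not of the Minesweeper algorithm itself), the sketch below intersects sorted lists with binary-search jumps, so the work it does scales with the number of "gaps" that certify the answer rather than with the total input size.

```python
from bisect import bisect_left

def adaptive_intersect(lists):
    """Adaptively intersect sorted lists: cycle through the lists, binary-
    searching each for the current candidate; whenever a list lacks the
    candidate, jump past the gap and restart the confirmation count."""
    if not lists or any(not l for l in lists):
        return []
    k, out = len(lists), []
    x, count, j = lists[0][0], 0, 0   # candidate, lists confirming x, next list
    while True:
        l = lists[j]
        i = bisect_left(l, x)
        if i == len(l):
            return out                # one list is exhausted: nothing more
        if l[i] == x:
            count += 1
            if count == k:            # every list contains x: emit it
                out.append(x)
                if i + 1 == len(l):
                    return out
                x, count = l[i + 1], 0
        else:
            x, count = l[i], 1        # gap: l proposes a new, larger candidate
        j = (j + 1) % k

print(adaptive_intersect([[1, 2, 3, 9], [2, 3, 9, 10], [0, 2, 9]]))  # [2, 9]
```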